Classifying and modeling texture images, especially those with significant rotation, illumination, scale, and viewpoint variations, is an active topic in computer vision. Inspired by local graph structure (LGS), local ternary patterns (LTP), and their variants, this paper proposes a novel image feature descriptor for texture and material classification, which we call the Petersen Graph Multi-Orientation based Multi-Scale Ternary Pattern (PGMO-MSTP). PGMO-MSTP is a histogram representation that efficiently encodes the joint information within an image across feature and scale spaces, exploiting the concepts of both LTP-like and LGS-like descriptors in order to overcome the shortcomings of these approaches. We first designed two single-scale horizontal and vertical Petersen graph-based ternary pattern descriptors (PGTPh and PGTPv). The essence of PGTPh and PGTPv is to encode each 5×5 image patch, extending the ideas of the LTP and LGS concepts, according to relationships between pixels sampled in a variety of spatial arrangements (i.e., up, down, left, and right) of Petersen graph-shaped oriented sampling structures. The histograms obtained from the single-scale descriptors PGTPh and PGTPv are then combined to build the effective multi-scale PGMO-MSTP model. Extensive experiments on sixteen challenging texture data sets demonstrate that PGMO-MSTP can outperform state-of-the-art handcrafted texture descriptors and deep learning-based feature extraction approaches. Moreover, a statistical comparison based on the Wilcoxon signed-rank test shows that PGMO-MSTP performed the best over all tested data sets.

Two delay-and-sum beamformers for 3-D synthetic aperture imaging with row-column addressed arrays are presented. Both beamformers are software implementations for graphics processing unit (GPU) execution with dynamic apodizations and third-order polynomial subsample interpolation.
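The per-focal-point delay-and-sum operation with apodization and third-order (cubic Lagrange) subsample interpolation can be sketched as follows. This is a minimal NumPy illustration, not the paper's GPU implementation; the function name, the monostatic transmit-delay model, and all parameters are assumptions for exposition.

```python
import numpy as np

def das_beamform_point(rf, fs, c, elem_pos, focus, apod=None):
    """Delay-and-sum for one focal point (hypothetical helper).

    rf       : (n_elements, n_samples) received RF data
    fs       : sampling rate [Hz]
    c        : speed of sound [m/s]
    elem_pos : (n_elements, 3) element positions [m]
    focus    : (3,) focal point [m]
    apod     : optional (n_elements,) apodization weights
    """
    n_el, n_samp = rf.shape
    if apod is None:
        apod = np.ones(n_el)
    # Two-way time of flight: array origin -> focus -> element
    # (a monostatic transmit origin is assumed here for simplicity)
    t_tx = np.linalg.norm(focus) / c                      # transmit delay
    t_rx = np.linalg.norm(elem_pos - focus, axis=1) / c   # receive delays
    delays = (t_tx + t_rx) * fs                           # fractional sample indices

    out = 0.0
    for e in range(n_el):
        i = int(np.floor(delays[e]))
        frac = delays[e] - i
        if 1 <= i < n_samp - 2:
            # Third-order (cubic Lagrange) subsample interpolation
            # over the 4 samples surrounding the fractional index
            y = rf[e, i - 1:i + 3]
            w = np.array([
                -frac * (frac - 1) * (frac - 2) / 6,
                (frac + 1) * (frac - 1) * (frac - 2) / 2,
                -(frac + 1) * frac * (frac - 2) / 2,
                (frac + 1) * frac * (frac - 1) / 6,
            ])
            out += apod[e] * (w @ y)
    return out
```

In a real beamformer this inner loop is executed once per output voxel and per virtual source, which is exactly the massively parallel workload that maps well onto GPU threads.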
The first beamformer was written in the MATLAB programming language and the second in C/C++ with NVIDIA's compute unified device architecture (CUDA) extensions. Performance was measured as volume rate and sample throughput on three different GPUs: a 1050 Ti, a 1080 Ti, and a TITAN V. The beamformers were evaluated across 112 combinations of output geometry, depth range, transducer array size, number of virtual sources, floating-point precision, and Nyquist-rate or in-phase/quadrature beamforming using analytic signals. Real-time imaging, defined as more than 30 volumes per second, was attained by the CUDA beamformer on the three GPUs for 13, 27, and 43 setups, respectively. The MATLAB beamformer did not attain real-time imaging for any setup. The median single-precision sample throughput of the CUDA beamformer was 4.9, 20.8, and 33.5 gigasamples per second on the three GPUs, respectively. The CUDA beamformer's throughput was an order of magnitude higher than that of the MATLAB beamformer.

A new local optimization (LO) technique, called Graph-Cut RANSAC, is proposed for RANSAC-like robust geometric model estimation. To select potential inliers, the proposed LO step applies the graph-cut algorithm, minimizing a labeling energy functional whenever a new so-far-the-best model is found. The energy originates from both the point-to-model residuals and the spatial coherence of the points. The proposed LO step is conceptually simple, easy to implement, globally optimal, and efficient. Graph-Cut RANSAC is combined with the bells and whistles of USAC. It has been tested on a number of publicly available datasets on a range of problems: homography, fundamental matrix, and essential matrix estimation.
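An energy of this kind, combining per-point residual costs with a spatial-coherence term over a neighborhood graph, can be illustrated as follows. The truncated quadratic data term and Potts smoothness term below are assumptions for exposition, not the paper's exact functional.

```python
import numpy as np

def labeling_energy(residuals, labels, edges, threshold, lam):
    """Evaluate an inlier/outlier labeling energy (illustrative form).

    residuals : (n,) point-to-model residuals
    labels    : (n,) 0/1 array, 1 = inlier
    edges     : list of (i, j) neighbour pairs in the spatial graph
    threshold : inlier residual threshold
    lam       : spatial-coherence weight
    """
    # Truncated quadratic residual cost in [0, 1]
    r = np.minimum(residuals / threshold, 1.0) ** 2
    # Data term: an inlier pays its residual cost, an outlier pays the complement
    data = np.where(labels == 1, r, 1.0 - r).sum()
    # Smoothness term: Potts penalty for neighbours with different labels
    smooth = sum(1 for i, j in edges if labels[i] != labels[j])
    return data + lam * smooth
```

The point of the graph-cut step is that, for energies of this (submodular) shape, the globally optimal binary labeling can be found exactly with a single max-flow/min-cut computation rather than by local search.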
Graph-Cut RANSAC is more geometrically accurate than state-of-the-art methods and runs faster than, or at similar speed to, less accurate alternatives.

The research in image quality assessment (IQA) has a long history, and significant progress has been made by leveraging recent advances in deep neural networks (DNNs). Despite high correlation numbers on existing IQA datasets, DNN-based models may be easily falsified in the group maximum differentiation (gMAD) competition, with strong counterexamples being identified. Here we show that gMAD examples can be used to improve blind IQA (BIQA) methods. Specifically, we first pre-train a DNN-based BIQA model using multiple noisy annotators, and fine-tune it on multiple subject-rated databases of synthetically distorted images, resulting in a top-performing baseline model. We then seek pairs of images by comparing the baseline model with a set of full-reference IQA methods in gMAD. We query ground-truth quality annotations for the selected images in a well-controlled laboratory environment, and further fine-tune the baseline on the combination of human-rated images from gMAD and existing databases. This process may be iterated, enabling active and progressive fine-tuning from gMAD examples for BIQA. We demonstrate the feasibility of our active learning scheme on a large-scale unlabeled image set, and show that the fine-tuned method achieves improved generalizability in gMAD without degrading performance on previously trained databases.

Bioluminescence tomography (BLT) is a promising modality designed to provide non-invasive, quantitative, three-dimensional information on the tumor distribution in living animals. However, BLT suffers from inferior reconstructions due to its ill-posedness. This study aims to improve the reconstruction performance of BLT.
We propose an adaptive grouping block sparse Bayesian learning (AGBSBL) method, which incorporates a sparsity prior, the correlation of neighboring mesh nodes, and an anatomical-structure prior to balance sparsity and morphology in BLT. Specifically, an adaptive grouping prior model adjusts the grouping according to the intensity of the mesh nodes during the optimization process.
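An intensity-driven regrouping step of this flavor could be realized as a flood fill over the mesh adjacency, merging neighboring nodes whose current intensity estimates exceed a fraction of the maximum. This is a hypothetical sketch; the abstract does not specify the grouping rule at this level of detail, and the threshold and flood-fill criterion below are assumptions.

```python
import numpy as np

def regroup_nodes(intensities, adjacency, rel_threshold=0.5):
    """Hypothetical regrouping step for an adaptive grouping prior.

    intensities : (n,) current intensity estimates at the mesh nodes
    adjacency   : dict node -> list of neighbouring node indices
    Returns a (n,) array of group labels.
    """
    n = len(intensities)
    # A node is "active" if its estimate is a significant fraction of the peak
    active = intensities >= rel_threshold * intensities.max()
    group = -np.ones(n, dtype=int)
    g = 0
    for s in range(n):
        if group[s] != -1 or not active[s]:
            continue
        # Flood fill: merge connected active neighbours into one block
        stack = [s]
        group[s] = g
        while stack:
            u = stack.pop()
            for v in adjacency[u]:
                if active[v] and group[v] == -1:
                    group[v] = g
                    stack.append(v)
        g += 1
    # Inactive nodes each form a singleton group (kept individually sparse)
    for s in range(n):
        if group[s] == -1:
            group[s] = g
            g += 1
    return group
```

Re-running such a step each iteration lets the block structure of the sparse Bayesian prior track the evolving source estimate, rather than being fixed in advance.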
Numerical simulations and in vivo experiments demonstrate that AGBSBL achieves high accuracy in position and morphology recovery, together with good stability and practicality.
The proposed method is a robust and effective reconstruction algorithm for BLT. Moreover, the proposed adaptive grouping strategy can further increase the practicality of BLT in biomedical applications.